

Content On This Page

  • Multiplication Theorem on Probability (for Dependent and Independent Events)
  • Independent Events: Definition and Test
  • Partition of Sample Space
  • The Law of Total Probability


Probability Theorems: Multiplication Law and Total Probability




Multiplication Theorem on Probability (for Dependent and Independent Events)


Purpose

The **Multiplication Theorem of Probability**, also known as the Multiplication Rule, provides a method to calculate the probability that two or more events will **both occur**. This is the probability of the intersection of the events, often denoted as $P(A \cap B)$ for two events A and B, or $P(A \text{ and } B)$.

The general form of the multiplication rule is derived directly from the definition of conditional probability and is applicable to any two events, whether they are dependent or independent.


General Multiplication Rule (for any two events)

Recall the definition of conditional probability of event A given event B, $P(A|B) = \frac{P(A \cap B)}{P(B)}$, provided $P(B) > 0$. Similarly, $P(B|A) = \frac{P(A \cap B)}{P(A)}$, provided $P(A) > 0$.

We can rearrange these definitions to solve for $P(A \cap B)$.

Multiplying the first equation by $P(B)$ gives:

$$P(A \cap B) = P(B) \cdot P(A|B) \quad \text{if } P(B) > 0$$

... (1)

Multiplying the second equation by $P(A)$ gives:

$$P(A \cap B) = P(A) \cdot P(B|A) \quad \text{if } P(A) > 0$$

... (2)

These are the two forms of the general multiplication rule. They state that the probability that both events A and B will occur is equal to the probability of one event occurring multiplied by the conditional probability of the other event occurring given that the first event has already occurred.

These formulas are always true, provided the conditioning event has a non-zero probability.


Multiplication Rule for Independent Events

Two events A and B are defined as **independent** if the occurrence of one event does not influence or change the probability of the other event occurring. Mathematically, A and B are independent if and only if $P(A|B) = P(A)$ (if $P(B)>0$) or $P(B|A) = P(B)$ (if $P(A)>0$).

If A and B are independent events, we can substitute $P(B|A) = P(B)$ into the general multiplication rule $P(A \cap B) = P(A) \cdot P(B|A)$. This yields the special multiplication rule for independent events:

$$P(A \cap B) = P(A) \cdot P(B) \quad \text{for independent events A and B}$$

... (3)

This means that the probability of two independent events both occurring is simply the product of their individual probabilities.
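This product rule is easy to check numerically. The following is a minimal Python sketch, using a hypothetical example of two independent fair-coin tosses, that applies formula (3) with exact fractions:

```python
from fractions import Fraction

# Hypothetical example: two independent tosses of a fair coin.
# A = 'first toss is Heads', B = 'second toss is Heads'.
p_a = Fraction(1, 2)
p_b = Fraction(1, 2)

# For independent events, P(A and B) = P(A) * P(B).
p_both = p_a * p_b
print(p_both)  # 1/4
```

Exact fractions avoid the rounding noise of floating-point arithmetic, which matters when testing equalities like $P(A \cap B) = P(A) \cdot P(B)$.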


Extension to Multiple Events

The multiplication rule can be extended to find the probability of the intersection of three or more events.

For three events A, B, and C, the probability that all three occur is given by:

$$P(A \cap B \cap C) = P(A) \cdot P(B|A) \cdot P(C | A \cap B)$$

... (4)

This formula assumes that $P(A)>0$ and $P(A \cap B)>0$. It states that the probability of A, B, and C all occurring is the probability of A, times the probability of B given A, times the probability of C given that both A and B have occurred.

If the three events A, B, and C are **mutually independent** (meaning the occurrence of any one of them, or of any combination of them, does not affect the probability of the remaining events), then the conditional probabilities simplify: $P(B|A) = P(B)$ and $P(C|A \cap B) = P(C)$.

In this case, the multiplication rule for three mutually independent events simplifies to:

$$P(A \cap B \cap C) = P(A) \cdot P(B) \cdot P(C) \quad \text{for mutually independent events A, B, C}$$

... (5)

This extends to any number of mutually independent events: the probability of their intersection is the product of their individual probabilities.
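As a quick illustration of the chain rule (4), consider the probability of drawing three Kings in a row without replacement from a standard 52-card deck (this three-card scenario is our own illustrative choice, extending the two-card example worked below):

```python
from fractions import Fraction

# Chain rule for three dependent events:
# P(K1 ∩ K2 ∩ K3) = P(K1) * P(K2|K1) * P(K3|K1 ∩ K2)
p = Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)
print(p)  # 1/5525
```

Each factor conditions on all the draws before it: 4 Kings among 52 cards, then 3 among 51, then 2 among 50.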


Example

Example 1. Two cards are drawn without replacement from a standard deck of 52 playing cards. What is the probability that both cards drawn are Kings?

Answer:

Given: Drawing two cards without replacement from a 52-card deck.

To Find: Probability that both cards are Kings.

Solution:

Let K1 be the event that the first card drawn is a King.

Let K2 be the event that the second card drawn is a King.

We want to find the probability of the intersection of these two events, $P(K1 \cap K2)$. Since the second card is drawn *without replacement*, the outcome of the first draw affects the possible outcomes and probabilities of the second draw. Thus, these are **dependent events**. We must use the general multiplication rule.

Using the general multiplication rule (Formula 2):

$$P(K1 \cap K2) = P(K1) \cdot P(K2 | K1)$$

... (i)

Calculate $P(K1)$:

Initially, there are 52 cards in the deck, and 4 of them are Kings.

$$P(K1) = \frac{\text{Number of Kings}}{\text{Total number of cards}} = \frac{4}{52}$$

... (ii)

Simplifying, $P(K1) = \frac{1}{13}$.

Calculate $P(K2 | K1)$:

This is the probability that the second card is a King, given that the first card drawn was a King and was not replaced.

After the first draw (which was a King), there are now $52 - 1 = 51$ cards remaining in the deck.

Since the first card was a King, there are now $4 - 1 = 3$ Kings remaining among the 51 cards.

$$P(K2 | K1) = \frac{\text{Number of Kings remaining}}{\text{Total number of cards remaining}} = \frac{3}{51}$$

... (iii)

Simplifying, $P(K2 | K1) = \frac{1}{17}$.

Apply the Multiplication Rule:

Substitute the values from (ii) and (iii) into formula (i):

$$P(K1 \cap K2) = P(K1) \cdot P(K2 | K1) = \frac{4}{52} \times \frac{3}{51}$$

$$P(K1 \cap K2) = \frac{\cancel{4}^{1}}{\cancel{52}_{13}} \times \frac{\cancel{3}^{1}}{\cancel{51}_{17}}$$

($4/52 = 1/13, 3/51 = 1/17$)

$$P(K1 \cap K2) = \frac{1}{13} \times \frac{1}{17} = \frac{1}{13 \times 17}$$

$$P(K1 \cap K2) = \frac{1}{221}$$

... (iv)

The probability that both cards drawn are Kings is $\frac{1}{221}$.
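The result $\frac{1}{221}$ can also be confirmed by brute force, enumerating every ordered pair of distinct cards. A minimal Python sketch (encoding ranks as 0 to 12, with 12 arbitrarily standing for the King):

```python
from fractions import Fraction
from itertools import permutations

# Model the deck as 13 ranks x 4 suits; rank 12 represents the King.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
KING = 12

# All ordered draws of two distinct cards (without replacement).
pairs = list(permutations(deck, 2))
favourable = sum(1 for a, b in pairs if a[0] == KING and b[0] == KING)

print(Fraction(favourable, len(pairs)))  # 1/221
```

The enumeration yields $\frac{12}{2652} = \frac{1}{221}$, matching the multiplication rule.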



Independent Events: Definition and Test


Definition

In probability, two events, $A$ and $B$, associated with the same random experiment are defined as **statistically independent** (or simply independent) if the occurrence or non-occurrence of one event does **not affect** the probability of the occurrence of the other event. Knowing whether one event happened or not does not change the likelihood of the other event happening.

Formally, event $A$ is independent of event $B$ if and only if:

$$P(A|B) = P(A) \quad \text{if } P(B)>0$$

... (1)

This definition states that the conditional probability of A given B is the same as the marginal (unconditional) probability of A. Similarly, event $B$ is independent of event $A$ if and only if:

$$P(B|A) = P(B) \quad \text{if } P(A)>0$$

... (2)

If $P(A)>0$ and $P(B)>0$, then $P(A|B)=P(A)$ if and only if $P(B|A)=P(B)$. The relationship of independence is symmetric.

If two events are not independent, they are called **dependent events**. For dependent events, the probability of one event occurring changes based on whether the other event has occurred (i.e., $P(A|B) \neq P(A)$ or $P(B|A) \neq P(B)$).

By convention, if $P(A)=0$ or $P(B)=0$, the events A and B are generally considered independent.


Test for Independence (Multiplication Rule Test)

The definition of independence using conditional probability ($P(A|B)=P(A)$) is useful conceptually, but it requires calculating a conditional probability. A more practical and widely used way to test if two events A and B are independent is by checking if their joint probability (the probability of their intersection) is equal to the product of their individual probabilities. This is derived from the multiplication rule.

Recall the general multiplication rule: $P(A \cap B) = P(A) \cdot P(B|A)$. If A and B are independent, $P(B|A) = P(B)$. Substituting this into the general rule gives $P(A \cap B) = P(A) \cdot P(B)$.

Conversely, if $P(A \cap B) = P(A) \cdot P(B)$, then (assuming $P(A)>0$) $P(B|A) = \frac{P(A \cap B)}{P(A)} = \frac{P(A)P(B)}{P(A)} = P(B)$, confirming independence.

Therefore, events A and B are independent **if and only if**:

$$P(A \cap B) = P(A) \cdot P(B)$$

... (3)

This equation serves as the primary test for statistical independence between two events.

Procedure to Test for Independence using the Multiplication Rule:

  1. Calculate the probability of event A, $P(A)$.
  2. Calculate the probability of event B, $P(B)$.
  3. Calculate the probability that both events A and B occur simultaneously, $P(A \cap B)$.
  4. Calculate the product of the individual probabilities: $P(A) \cdot P(B)$.
  5. Compare the results from step 3 and step 4:
    • If $P(A \cap B) = P(A) \cdot P(B)$, the events A and B are **independent**.
    • If $P(A \cap B) \neq P(A) \cdot P(B)$, the events A and B are **dependent**.
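The five-step procedure above can be wrapped in a small helper. A Python sketch with two illustrative inputs, one independent pair (the die example worked below) and one dependent pair (the two-Kings draw from the previous section):

```python
from fractions import Fraction

def are_independent(p_a, p_b, p_a_and_b):
    """Multiplication-rule test: A and B are independent
    if and only if P(A ∩ B) == P(A) · P(B)."""
    return p_a_and_b == p_a * p_b

# Die roll: A = 'multiple of 3', B = 'even number' -> independent.
print(are_independent(Fraction(1, 3), Fraction(1, 2), Fraction(1, 6)))   # True

# Two Kings without replacement: P(K1) = P(K2) = 1/13,
# but P(K1 ∩ K2) = 1/221 != 1/169 -> dependent.
print(are_independent(Fraction(1, 13), Fraction(1, 13), Fraction(1, 221)))  # False
```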

Independence of More Than Two Events

For more than two events, the concept of independence needs careful definition. For three events A, B, and C to be **mutually independent**, it is not sufficient for them to be pairwise independent. Mutual independence requires that the probability of the intersection of any subset of these events is equal to the product of their individual probabilities.

For three events A, B, and C to be mutually independent, all four of the following conditions must be satisfied:

  1. $P(A \cap B) = P(A) \cdot P(B)$
  2. $P(A \cap C) = P(A) \cdot P(C)$
  3. $P(B \cap C) = P(B) \cdot P(C)$
  4. $P(A \cap B \cap C) = P(A) \cdot P(B) \cdot P(C)$

If only the first three conditions hold, the events are said to be **pairwise independent**, but not necessarily mutually independent.


Example

Example 1. A fair six-sided die is rolled once. Let E be the event 'the number appearing is a multiple of 3' and F be the event 'the number appearing is an even number'. Are events E and F independent?

Answer:

Given: A fair die roll. Event E: multiple of 3. Event F: even number.

To Test: Are E and F independent?

Solution:

The sample space for rolling a fair die is $S = \{1, 2, 3, 4, 5, 6\}$. The total number of equally likely outcomes is $n(S) = 6$.

Let E be the event 'the number appearing is a multiple of 3'.

Outcomes in E are the multiples of 3 in S: $E = \{3, 6\}$. Number of outcomes in E, $n(E) = 2$.

Probability of E, $P(E) = \frac{n(E)}{n(S)} = \frac{2}{6} = \frac{1}{3}$.

$$P(E) = \frac{1}{3}$$

... (i)

Let F be the event 'the number appearing is an even number'.

Outcomes in F are the even numbers in S: $F = \{2, 4, 6\}$. Number of outcomes in F, $n(F) = 3$.

Probability of F, $P(F) = \frac{n(F)}{n(S)} = \frac{3}{6} = \frac{1}{2}$.

$$P(F) = \frac{1}{2}$$

... (ii)

Now, consider the intersection event $E \cap F$, which means 'the number is a multiple of 3 AND an even number'.

The outcomes common to E and F are: $E \cap F = \{3, 6\} \cap \{2, 4, 6\} = \{6\}$. Number of outcomes in $E \cap F$, $n(E \cap F) = 1$.

Probability of $E \cap F$, $P(E \cap F) = \frac{n(E \cap F)}{n(S)} = \frac{1}{6}$.

$$P(E \cap F) = \frac{1}{6}$$

... (iii)

Now, we use the multiplication rule test for independence: Is $P(E \cap F) = P(E) \cdot P(F)$?

Calculate the product of the individual probabilities $P(E) \cdot P(F)$ from (i) and (ii):

$$P(E) \cdot P(F) = \frac{1}{3} \cdot \frac{1}{2} = \frac{1}{6}$$

... (iv)

Compare the result from the intersection probability (iii) and the product of individual probabilities (iv):

$P(E \cap F) = \frac{1}{6}$ and $P(E) \cdot P(F) = \frac{1}{6}$.

Since $P(E \cap F) = P(E) \cdot P(F)$, the events E and F are **independent**.

Interpretation: Knowing that the number rolled is a multiple of 3 (event E) does not change the probability that the number is even (event F), and vice versa. Given that the roll is a 3 or a 6, the chance that it is even is still $\frac{1}{2}$ (only the 6 is even), exactly the unconditional probability of event F.
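The whole test can also be carried out by direct enumeration of the sample space. A minimal Python sketch of Example 1:

```python
from fractions import Fraction

S = set(range(1, 7))                 # sample space of a fair die
E = {n for n in S if n % 3 == 0}     # multiples of 3: {3, 6}
F = {n for n in S if n % 2 == 0}     # even numbers: {2, 4, 6}

def p(event):
    """Probability of an event under equally likely outcomes."""
    return Fraction(len(event), len(S))

print(p(E), p(F), p(E & F))          # 1/3 1/2 1/6
print(p(E & F) == p(E) * p(F))       # True -> E and F are independent
```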




Partition of Sample Space


Definition

In probability theory, a set of events $E_1, E_2, \dots, E_n$ is said to form a **partition** of the sample space $S$ if they collectively cover the entire sample space without overlapping. Formally, a collection of events $\{E_1, E_2, \dots, E_n\}$ constitutes a partition of $S$ if the following three conditions are satisfied:

  1. Mutually Exclusive (Pairwise Disjoint):

    Any two distinct events in the collection are mutually exclusive. This means that if one event occurs, none of the other events in the collection can occur simultaneously in a single trial of the experiment.

    $$E_i \cap E_j = \phi \quad \text{for all } i \neq j, \text{ where } i, j \in \{1, 2, \dots, n\}$$

    ... (1)

  2. Exhaustive:

    The union of all the events in the collection is equal to the entire sample space. This means that at least one of these events must occur whenever the experiment is performed; there are no outcomes in $S$ that are not included in at least one of the $E_i$'s.

    $$E_1 \cup E_2 \cup \dots \cup E_n = \bigcup\limits_{i=1}^{n} E_i = S$$

    ... (2)

  3. Non-Zero Probability (Standard Assumption for many theorems):

    Each event in the partition is usually assumed to have a positive probability of occurring.

    $$P(E_i) > 0 \quad \text{for all } i \in \{1, 2, \dots, n\}$$

    ... (3)

    While conditions 1 and 2 strictly define a partition, condition 3 is often added as a requirement for partitions used in theorems like the Law of Total Probability and Bayes' Theorem, to avoid division by zero.

Visually, a partition divides the sample space $S$ into a set of non-overlapping regions that together fill up $S$.

Venn Diagram showing a partition of sample space S
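Conditions 1 and 2 can be checked mechanically on a finite sample space. A minimal Python sketch, using three hypothetical events on a die roll chosen to form a partition:

```python
# Sample space of a fair die, and three hypothetical candidate events.
S = set(range(1, 7))
parts = [{1, 2}, {3, 4}, {5, 6}]

# Condition 1: every pair of distinct events is disjoint.
pairwise_disjoint = all(
    parts[i].isdisjoint(parts[j])
    for i in range(len(parts))
    for j in range(i + 1, len(parts))
)

# Condition 2: the union of all events equals the sample space.
exhaustive = set().union(*parts) == S

print(pairwise_disjoint and exhaustive)  # True -> forms a partition of S
```

Replacing one of the sets with, say, $\{3, 5\}$ would break disjointness or exhaustiveness and make the check print False.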

Examples of Partitions

For any event $E$ with $0 < P(E) < 1$, the pair $\{E, E'\}$ (an event and its complement) forms the simplest partition of $S$: the two events are mutually exclusive ($E \cap E' = \phi$), exhaustive ($E \cup E' = S$), and both have positive probability. Similarly, for a single roll of a die, the six elementary events $\{1\}, \{2\}, \dots, \{6\}$ form a partition of the sample space, as do the two events 'even number' and 'odd number'.

The concept of partitioning the sample space into a set of mutually exclusive and exhaustive events is fundamental to solving many probability problems, especially those involving sequential events or multiple stages, as it forms the basis for the Law of Total Probability and Bayes' Theorem.



The Law of Total Probability


Statement

The **Law of Total Probability** is a fundamental theorem that relates the probability of an event to the conditional probabilities of that event given a partition of the sample space. It is used to calculate the overall probability of an event when the conditions under which it can occur are divided into several mutually exclusive and exhaustive scenarios.

Let $\{E_1, E_2, \dots, E_n\}$ be a partition of the sample space $S$. This means the events $E_i$ are pairwise mutually exclusive ($E_i \cap E_j = \phi$ for $i \neq j$), their union is the entire sample space ($\bigcup\limits_{i=1}^{n} E_i = S$), and we assume $P(E_i) > 0$ for all $i=1, \dots, n$.

Let $A$ be any event associated with the sample space $S$. The Law of Total Probability states that the probability of event A can be calculated as the sum of the conditional probabilities of A given each event in the partition ($P(A|E_i)$), weighted by the probability of each event in the partition ($P(E_i)$).

Formula for the Law of Total Probability:

$$P(A) = P(E_1) P(A|E_1) + P(E_2) P(A|E_2) + \dots + P(E_n) P(A|E_n)$$

... (1)

Using summation notation, this is:

$$P(A) = \sum\limits_{i=1}^{n} P(E_i) P(A|E_i)$$

... (2)


Derivation and Intuition

The Law of Total Probability can be derived using the properties of partitions and the multiplication rule:

Since $\{E_1, E_2, \dots, E_n\}$ is a partition of $S$, any event $A$ within $S$ can be expressed as the union of its intersections with each event in the partition.

$$A = (A \cap E_1) \cup (A \cap E_2) \cup \dots \cup (A \cap E_n)$$

... (iii)

Venn Diagram illustrating the Law of Total Probability

Because the events $E_i$ are pairwise mutually exclusive, their intersections with event A, $(A \cap E_i)$, are also pairwise mutually exclusive. For any $i \neq j$, $(A \cap E_i) \cap (A \cap E_j) = A \cap A \cap E_i \cap E_j = A \cap \phi = \phi$.

Since the events $(A \cap E_1), (A \cap E_2), \dots, (A \cap E_n)$ are mutually exclusive, by Axiom 3 (Additivity of probability for mutually exclusive events), the probability of their union is the sum of their individual probabilities:

$$P(A) = P(A \cap E_1) + P(A \cap E_2) + \dots + P(A \cap E_n)$$

... (iv)

Now, recall the general Multiplication Rule (Formula (2) from the first section above): $P(A \cap E_i) = P(E_i) P(A|E_i)$ (assuming $P(E_i)>0$). Substitute this into each term of the sum in equation (iv):

$$P(A) = P(E_1) P(A|E_1) + P(E_2) P(A|E_2) + \dots + P(E_n) P(A|E_n)$$

... (v)

This completes the derivation of the Law of Total Probability.

The intuition is that the overall probability of event A is the weighted average of the conditional probabilities of A under each scenario $E_i$, where the weights are the probabilities of those scenarios occurring.
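The weighted sum in formula (2) translates directly into code. The following sketch uses hypothetical numbers chosen for illustration: a factory with three machines producing $\frac{1}{2}$, $\frac{3}{10}$, and $\frac{1}{5}$ of the output, with defect rates of 1%, 2%, and 3% respectively:

```python
from fractions import Fraction

def total_probability(priors, conditionals):
    """Law of Total Probability: P(A) = sum_i P(E_i) * P(A|E_i).
    priors: P(E_i) for a partition (must sum to 1);
    conditionals: P(A|E_i) for the same scenarios, in order."""
    assert sum(priors) == 1, "the E_i must be exhaustive"
    return sum(p * c for p, c in zip(priors, conditionals))

# Hypothetical factory: share of output per machine ...
priors = [Fraction(1, 2), Fraction(3, 10), Fraction(1, 5)]
# ... and probability that an item from that machine is defective.
conditionals = [Fraction(1, 100), Fraction(2, 100), Fraction(3, 100)]

print(total_probability(priors, conditionals))  # 17/1000
```

The overall defect rate, $\frac{17}{1000}$, is the weighted average of the three per-machine rates, exactly as the intuition above describes.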


Purpose and Application

The Law of Total Probability is particularly useful in multi-stage experiments or when calculating the probability of an event $A$ that can occur under several different, mutually exclusive circumstances or conditions. You use this law when:

  • The sample space can be divided into a partition of mutually exclusive and exhaustive scenarios $E_1, E_2, \dots, E_n$.
  • The probability of each scenario, $P(E_i)$, is known or easy to calculate.
  • The conditional probability of $A$ under each scenario, $P(A|E_i)$, is known or easy to calculate, while $P(A)$ itself is difficult to find directly.

It provides a way to break down the calculation of a complex probability $P(A)$ into simpler calculations involving conditional probabilities related to a partition of the sample space. It is a fundamental stepping stone to understanding and applying Bayes' Theorem.


Example

Example 1. Urn I contains 2 white and 3 black balls. Urn II contains 4 white and 1 black ball. A fair coin is tossed. If it lands Heads, Urn I is chosen, and a ball is drawn. If it lands Tails, Urn II is chosen, and a ball is drawn. What is the probability that the ball drawn is white?

Answer:

Given: Two urns with different compositions of balls. Urn selection depends on a coin toss.

To Find: Probability of drawing a white ball.

Solution:

Let $S$ be the sample space of the entire experiment (coin toss and ball draw).

Let $E_1$ be the event that Urn I is chosen (occurs if the coin is Heads).

Let $E_2$ be the event that Urn II is chosen (occurs if the coin is Tails).

Since a fair coin is tossed, the probability of Heads or Tails is $1/2$. Thus, the probabilities of choosing the urns are:

$$P(E_1) = P(\text{Heads}) = \frac{1}{2}$$

... (i)

$$P(E_2) = P(\text{Tails}) = \frac{1}{2}$$

... (ii)

The events $E_1$ and $E_2$ form a partition of the sample space $S$:

  • They are mutually exclusive (you choose either Urn I or Urn II, not both). $E_1 \cap E_2 = \phi$.
  • They are exhaustive (you must choose either Urn I or Urn II). $E_1 \cup E_2 = S$.
  • Their probabilities are positive ($1/2 > 0$).

Let $W$ be the event that the ball drawn is white. We want to find $P(W)$.

We can apply the Law of Total Probability, partitioning the sample space based on which urn is chosen ($E_1$ or $E_2$). The formula for $P(W)$ using this partition is:

$$P(W) = P(E_1) P(W|E_1) + P(E_2) P(W|E_2)$$

... (iii)

We need to find the conditional probabilities $P(W|E_1)$ and $P(W|E_2)$.

Calculate $P(W|E_1)$: This is the probability of drawing a white ball given that Urn I was chosen.

Urn I contains 2 white balls and 3 black balls, for a total of $2+3=5$ balls.

$$P(W|E_1) = \frac{\text{Number of white balls in Urn I}}{\text{Total number of balls in Urn I}} = \frac{2}{5}$$

... (iv)

Calculate $P(W|E_2)$: This is the probability of drawing a white ball given that Urn II was chosen.

Urn II contains 4 white balls and 1 black ball, for a total of $4+1=5$ balls.

$$P(W|E_2) = \frac{\text{Number of white balls in Urn II}}{\text{Total number of balls in Urn II}} = \frac{4}{5}$$

... (v)

Apply the Law of Total Probability:

Substitute the probabilities from (i), (ii), (iv), and (v) into formula (iii):

$$P(W) = P(E_1) P(W|E_1) + P(E_2) P(W|E_2)$$

$$P(W) = \left(\frac{1}{2}\right) \times \left(\frac{2}{5}\right) + \left(\frac{1}{2}\right) \times \left(\frac{4}{5}\right)$$

$$P(W) = \frac{2}{10} + \frac{4}{10}$$

$$P(W) = \frac{2+4}{10} = \frac{6}{10}$$

$$P(W) = \frac{\cancel{6}^{3}}{\cancel{10}_{5}} = \frac{3}{5}$$

... (vi)

The probability that the ball drawn is white is $\frac{3}{5}$.
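The computation can be reproduced with exact fractions. A minimal Python sketch of Example 1:

```python
from fractions import Fraction

# Coin toss selects the urn: Heads -> Urn I, Tails -> Urn II.
p_heads = Fraction(1, 2)

p_white_urn1 = Fraction(2, 5)   # Urn I: 2 white out of 5 balls
p_white_urn2 = Fraction(4, 5)   # Urn II: 4 white out of 5 balls

# Law of Total Probability over the partition {Heads, Tails}.
p_white = p_heads * p_white_urn1 + (1 - p_heads) * p_white_urn2
print(p_white)  # 3/5
```

Changing either urn's composition or using a biased coin only changes the inputs; the weighted-sum structure of the calculation stays the same.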